11 - MLPDES25: Replicator dynamics as the large-population limit of a multi-strategy discrete Moran process [ID:57447]

Thank you, Nico, and thanks for inviting me to this conference.

Sorry, I have a very low tone of voice, so I'll do my best to put some frequencies out

of my vocal cords.

So, as Nico anticipated, I will talk about a result obtained in collaboration with Marco Morandotti from Politecnico di Torino.

It's about the replicator dynamics.

It's an observation that we made to gain better insight into this equation.

In particular, we derive it as a large population limit of a discrete process.

But before going into the details of the result we obtained, since I don't know if everyone knows the replicator equation, I want to give an introduction to it.

So the replicator dynamics is described by a system of ODE's.

The unknown is a vector lambda with M components, lambda one up to lambda M.

The equation can be defined on an unbounded interval of times.

In this talk I'm interested in a bounded interval [0, T], for T fixed.

It can be as large as you want.

And the equation describes how each component of this vector moves.

So the derivative of the i-th component is equal to the i-th component itself times something.

And this something depends on some parameters.

All these parameters are collected in an M-by-M matrix A.

And the right-hand side is given by A times the vector lambda, taking its i-th component, and subtracting lambda transpose times A times lambda.
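In symbols, the system being described can be written as follows; this is a sketch in standard replicator-dynamics notation, with A the M-by-M matrix of parameters and lambda_i the components of the unknown vector, as above.

\[
\dot{\lambda}_i(t) \;=\; \lambda_i(t)\,\Big[\big(A\lambda(t)\big)_i \;-\; \lambda(t)^{\top} A\,\lambda(t)\Big],
\qquad i = 1,\dots,M,\quad t \in [0,T].
\]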

Okay, beyond the algebra, there is a meaning behind this term.

So my first objective is to explain what the right-hand side is doing.

Of course this is a Cauchy problem, it's coupled with some initial conditions on the vector lambda.

So let's understand what the right hand side in the equation is doing.

And first of all let's do a calculus exercise, you can give this to your students.

Let's see what happens to the sum of the components lambda I of the vector lambda.

So if we sum the equations, we obtain an equation for the sum of all the components.

We get that the derivative of the sum of the components is the sum over i of lambda_i times the i-th component of A lambda, minus lambda transpose A lambda times the sum of the lambda_i's, since lambda transpose A lambda is constant with respect to i.

For the first term, we observe that it is simply again lambda transpose A times lambda.

So I can factor this out and get lambda transpose A lambda times one minus the sum of the components.
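Written out, the computation just described is the following sketch, in the same notation as above:

\[
\frac{\mathrm{d}}{\mathrm{d}t}\sum_{i=1}^{M}\lambda_i
\;=\; \sum_{i=1}^{M}\lambda_i\,(A\lambda)_i \;-\; \big(\lambda^{\top}A\lambda\big)\sum_{i=1}^{M}\lambda_i
\;=\; \big(\lambda^{\top}A\lambda\big)\Big(1-\sum_{i=1}^{M}\lambda_i\Big).
\]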

Now we observe that the value one is an equilibrium point for the differential equation solved by the sum of the components.

This means that if at the initial time the sum of the components is one, this will remain true for all times along the evolution.

With a similar ODE trick, notice that if I divide by lambda_i, this becomes an equation for the logarithm of lambda_i.

So I can write lambda_i as the initial condition times an exponential.

This means that if the initial condition of a component is zero, it will stay zero for all times; if it is positive, it will stay positive for all times.
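As a sketch of this step (formally dividing by lambda_i, so strictly valid where lambda_i > 0; the case lambda_i(0) = 0 follows from uniqueness of solutions):

\[
\frac{\mathrm{d}}{\mathrm{d}t}\log\lambda_i \;=\; (A\lambda)_i - \lambda^{\top}A\lambda
\quad\Longrightarrow\quad
\lambda_i(t) \;=\; \lambda_i(0)\,\exp\!\Big(\int_0^t \big[(A\lambda(s))_i - \lambda(s)^{\top}A\lambda(s)\big]\,\mathrm{d}s\Big),
\]

so each component keeps the sign of its initial datum.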

So this suggests how the equation should be interpreted.

The relevant setting is when the initial condition for the vector is such that the sum of the components is one and the components are non-negative.

This means that the equation is in fact invariant on the (M-1)-dimensional simplex, which is the compact subset of an (M-1)-dimensional affine space given by the vectors whose components are non-negative and sum to one.
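In formulas, the simplex in question is (a standard definition, stated here for concreteness):

\[
\Delta^{M-1} \;=\; \Big\{\lambda \in \mathbb{R}^{M} \;:\; \lambda_i \ge 0 \ \text{for all } i,\ \ \sum_{i=1}^{M}\lambda_i = 1\Big\}.
\]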

In the picture you see the 2-simplex drawn in three dimensions.

So if we take an initial condition on the simplex, the solution curve of the ODE stays on the simplex.

If you want, we can rephrase this by collecting the right-hand side into a vector field that I will call B in the rest of the talk: the components of B are simply the components of the right-hand side of the equation.

The vector field B that defines the ODE system is such that the sum of all its components is zero, because it is a tangent vector to this flat manifold.
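A sketch of this tangency property, with B defined componentwise from the right-hand side as just described:

\[
B_i(\lambda) \;=\; \lambda_i\big[(A\lambda)_i - \lambda^{\top}A\lambda\big],
\qquad
\sum_{i=1}^{M} B_i(\lambda) \;=\; \lambda^{\top}A\lambda \;-\; \Big(\sum_{i=1}^{M}\lambda_i\Big)\lambda^{\top}A\lambda \;=\; 0
\quad\text{whenever } \textstyle\sum_{i=1}^{M}\lambda_i = 1.
\]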

So what does this mean?

So if we interpret the simplex as a set of probability measures in the sense that each component i measures a probability,

Presenter

Prof. Gianluca Orlando

Access

Open access

Duration

00:33:32 min

Recording date

2025-04-29

Uploaded on

2025-04-29 17:01:11

Language

en-US

#MLPDES25 Machine Learning and PDEs Workshop 
Mon. – Wed. April 28 – 30, 2025
HOST: FAU MoD, Research Center for Mathematics of Data at FAU, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen – Bavaria (Germany)
 
SPEAKERS 
• Paola Antonietti. Politecnico di Milano
 • Alessandro Coclite. Politecnico di Bari
 • Fariba Fahroo. Air Force Office of Scientific Research
 • Giovanni Fantuzzi. FAU MoD/DCN-AvH, Friedrich-Alexander-Universität Erlangen-Nürnberg
 • Borjan Geshkovski. Inria, Sorbonne Université
 • Paola Goatin. Inria, Sophia-Antipolis
 • Shi Jin. SJTU, Shanghai Jiao Tong University 
 • Alexander Keimer. Universität Rostock
 • Felix J. Knutson. Air Force Office of Scientific Research
 • Anne Koelewijn. FAU MoD, Friedrich-Alexander-Universität Erlangen-Nürnberg
 • Günter Leugering. FAU, Friedrich-Alexander-Universität Erlangen-Nürnberg
 • Lorenzo Liverani. FAU, Friedrich-Alexander-Universität Erlangen-Nürnberg
 • Camilla Nobili. University of Surrey
 • Gianluca Orlando. Politecnico di Bari
 • Michele Palladino. Università degli Studi dell’Aquila
 • Gabriel Peyré. CNRS, ENS-PSL
 • Alessio Porretta. Università di Roma Tor Vergata
 • Francesco Regazzoni. Politecnico di Milano
 • Domènec Ruiz-Balet. Université Paris Dauphine
 • Daniel Tenbrinck. FAU, Friedrich-Alexander-Universität Erlangen-Nürnberg
 • Daniela Tonon. Università di Padova
 • Juncheng Wei. Chinese University of Hong Kong
 • Yaoyu Zhang. Shanghai Jiao Tong University
 • Wei Zhu. Georgia Institute of Technology
 
SCIENTIFIC COMMITTEE 
• Giuseppe Maria Coclite. Politecnico di Bari
• Enrique Zuazua. FAU MoD/DCN-AvH, Friedrich-Alexander-Universität Erlangen-Nürnberg
 
ORGANIZING COMMITTEE 
• Darlis Bracho Tudares. FAU MoD/DCN-AvH, Friedrich-Alexander-Universität Erlangen-Nürnberg
• Nicola De Nitti. Università di Pisa
• Lorenzo Liverani. FAU DCN-AvH, Friedrich-Alexander-Universität Erlangen-Nürnberg
 
Video teaser of the #MLPDES25 Workshop: https://youtu.be/4sJPBkXYw3M
 
 
#FAU #FAUMoD #MLPDES25 #workshop #erlangen #bavaria #germany #deutschland #mathematics #research #machinelearning #neuralnetworks

Tags

Erlangen mathematics Neural Network PDE Applied Mathematics FAU MoD Partial Differential Equations Bavaria Machine Learning FAU MoD workshop FAU